feat(models): updated model configs, updated anthropic provider to propagate errors back to user if any #3159
Conversation
Greptile Summary

Updated model configurations and improved error handling across multiple AI providers. Key changes include:

- Model Configuration Updates
- Anthropic Provider Improvements
- OpenAI/Azure Provider Improvements
- Other Provider Updates
- Type Safety & Code Quality
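The PR title says Anthropic errors are now propagated back to the user. The actual diff is not shown here, so the following is only a minimal sketch of that pattern: `callAnthropic` and `ProviderError` are hypothetical names, not identifiers from this PR.

```typescript
// Hypothetical error wrapper: names are illustrative, not the PR's actual code.
class ProviderError extends Error {
  constructor(
    message: string,
    public readonly status?: number
  ) {
    super(message)
    this.name = 'ProviderError'
  }
}

// Instead of swallowing SDK failures, the provider surfaces the underlying
// message (and HTTP status, if present) so the user sees what went wrong.
async function callAnthropic(
  request: () => Promise<{ content: string }>
): Promise<{ content: string }> {
  try {
    return await request()
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err)
    const status = (err as { status?: number }).status
    throw new ProviderError(`Anthropic request failed: ${message}`, status)
  }
}
```

The design choice being sketched: rethrowing a typed error with the original message attached lets the agent layer render it to the user, rather than returning a silent empty response.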
Confidence Score: 4/5
Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant AgentBlock
    participant AgentHandler
    participant ProviderRegistry
    participant AnthropicProvider
    participant OpenAIProvider
    participant ModelConfig
    User->>AgentBlock: Configure agent (model, thinkingLevel, reasoningEffort, etc)
    AgentBlock->>AgentHandler: Execute with inputs
    AgentHandler->>AgentHandler: Build provider request
    Note over AgentHandler: Add thinkingLevel parameter
    AgentHandler->>ProviderRegistry: Route request to provider
    alt Anthropic Provider
        ProviderRegistry->>AnthropicProvider: executeRequest()
        AnthropicProvider->>ModelConfig: getMaxOutputTokensForModel()
        ModelConfig-->>AnthropicProvider: maxTokens (simplified from object to number)
        AnthropicProvider->>AnthropicProvider: Build thinking config (if thinkingLevel !== 'none')
        Note over AnthropicProvider: Check compatibility:<br/>- Adjust max_tokens for budget_tokens<br/>- Set temperature=undefined<br/>- Limit tool_choice to 'auto'/'none'
        AnthropicProvider->>AnthropicProvider: createMessage() helper
        Note over AnthropicProvider: Auto-switch to streaming<br/>if max_tokens > 21333
        AnthropicProvider->>AnthropicProvider: Preserve thinking blocks in tool loops
        AnthropicProvider-->>AgentHandler: Response with enhanced error handling
    else OpenAI/Azure Provider
        ProviderRegistry->>OpenAIProvider: executeRequest()
        OpenAIProvider->>OpenAIProvider: Skip 'auto' for reasoningEffort/verbosity
        Note over OpenAIProvider: Prevent invalid API values
        OpenAIProvider-->>AgentHandler: Response with improved typing
    end
    AgentHandler-->>User: Final response with tokens, cost, timing
```
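The Anthropic branch of the diagram names several compatibility rules for extended thinking (adjust `max_tokens` for `budget_tokens`, unset `temperature`, limit `tool_choice` to 'auto'/'none', auto-stream past 21333 tokens). A minimal sketch of those rules follows; the field names match Anthropic's Messages API, but the helper name and the budget table are assumptions, not the PR's actual code.

```typescript
type ThinkingLevel = 'none' | 'low' | 'medium' | 'high'

interface AnthropicParams {
  max_tokens: number
  temperature?: number
  tool_choice?: { type: 'auto' | 'none' | 'any' | 'tool' }
  thinking?: { type: 'enabled'; budget_tokens: number }
  stream?: boolean
}

// Hypothetical budget table: the PR's real per-level budgets are not shown here.
const THINKING_BUDGETS: Record<Exclude<ThinkingLevel, 'none'>, number> = {
  low: 1024,
  medium: 8192,
  high: 16384,
}

// Per the diagram, requests above this max_tokens must use streaming.
const STREAMING_THRESHOLD = 21333

function applyThinking(params: AnthropicParams, level: ThinkingLevel): AnthropicParams {
  if (level === 'none') return params
  const budget = THINKING_BUDGETS[level]
  const adjusted: AnthropicParams = {
    ...params,
    // max_tokens must leave room for the thinking budget.
    max_tokens: Math.max(params.max_tokens, budget + 1024),
    // Extended thinking is incompatible with an explicit temperature.
    temperature: undefined,
    thinking: { type: 'enabled', budget_tokens: budget },
  }
  // Forced tool use is incompatible with thinking; fall back to 'auto'.
  if (adjusted.tool_choice && adjusted.tool_choice.type !== 'none') {
    adjusted.tool_choice = { type: 'auto' }
  }
  // Long non-streaming requests are rejected, so auto-switch to streaming.
  if (adjusted.max_tokens > STREAMING_THRESHOLD) {
    adjusted.stream = true
  }
  return adjusted
}
```

Centralizing these rules in one pass over the params keeps each incompatibility fix next to a comment explaining it, which matches the sequence of checks the diagram describes.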
5 files reviewed, no comments
@greptile
@cursor review
6 files reviewed, 1 comment
✅ Bugbot reviewed your changes and found no new issues!
Force-pushed from ac6420b to d431c8d
Summary
Type of Change
Testing
Tested manually
Checklist